Search Results: "pik"

18 January 2015

Guido Günther: whatmaps 0.0.9

I have released whatmaps 0.0.9, a tool to check which processes map shared objects of a certain package. It can integrate into apt to automatically restart services after a security upgrade. This release fixes the integration with recent systemd (as in Debian Jessie), makes logging more consistent and eases integration into downstream distributions. It's available in Debian Sid and Jessie and will show up in Wheezy-backports soon. This blog is flattr enabled.
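Conceptually, whatmaps looks at which running processes still have a given shared object mapped, by inspecting /proc/<pid>/maps. The following is only a rough, hypothetical sketch of that idea in Go (whatmaps itself is a Python tool and does considerably more, e.g. resolving files to packages and restarting services); the library path is a placeholder:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Placeholder: the shared object we are interested in.
	target := "/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0"

	procs, _ := filepath.Glob("/proc/[0-9]*")
	for _, dir := range procs {
		f, err := os.Open(filepath.Join(dir, "maps"))
		if err != nil {
			continue // process may have exited, or we lack permission
		}
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// Each maps line ends with the path of the mapped file, if any.
			if strings.HasSuffix(scanner.Text(), target) {
				fmt.Printf("pid %s still maps %s\n", filepath.Base(dir), target)
				break
			}
		}
		f.Close()
	}
}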

17 January 2015

Guido Günther: krb5-auth-dialog 3.15.4

To keep up with GNOME's schedule I've released krb5-auth-dialog 3.15.4. The changes in 3.15.1 and 3.15.4 include, among updated translations, the replacement of deprecated GTK+ widgets, minor UI cleanups and bug fixes, a header bar fix that makes us only use header bar buttons iff the desktop environment has them enabled (screenshots: krb5-auth-dialog with header bar, krb5-auth-dialog without header bar). This makes krb5-auth-dialog better integrated into other desktops again, thanks to mclasen's awesome work. This blog is flattr enabled.

23 December 2014

Michael Stapelberg: Debian Code Search: taming the latency tail

It's been a couple of weeks since I've launched Debian Code Search Instant, so people have had the chance to use it for a while and that gives me plenty of data points to look at :-). For every query, I log the search term itself as well as the duration the query took to execute. That way, I can easily identify queries that take a long time and see why that is.

There is a class of queries for which Debian Code Search (DCS) doesn't perform so well, and that's queries that consist of trigrams which are extremely common. Whenever DCS receives such a query, it needs to search through a lot of files. Note that it doesn't really matter if there are plenty of results or not; it's the number of files that potentially contain a result which matters. One such query is "arse" (we get a lot of curse words). It consists of only two trigrams ("ars" and "rse"), which are extremely common in program source code. As a couple of examples, the terms "parse", "sparse", "charset" and "coarse" are all matched by that. As an aside, if you really want to search for just "arse", use word boundaries, i.e. \barse\b, which also makes the query significantly faster.

Fixing the overloaded frontend

When DCS first received the query, "arse" would lead to our frontend server crashing. That was due to (intentionally) unoptimized code: we aggregated all search results from all 6 source backends in memory, sorted them, and then wrote them out to disk. I addressed this in commit d2922fe92 with the following measures:
  1. Instead of keeping the entire result in memory, just write the result to a temporary file on disk (unsorted.json) and store pointers into that file in memory, i.e. (offset, length) tuples. In order to sort the results, we also need to store the ranking and the path (to resolve ties and thereby guarantee a stable result order over multiple search queries). For grouping the results by source package, we need to keep the package name.
  2. If you think about it, you don't need the entire path in order to break a tie: the hash is enough, as it defines an ordering. That ordering may be different, but any ordering is good enough for the purpose of merely breaking a tie in a deterministic way. I'm using Go's hash/fnv, the only non-cryptographic (fast!) hash function that is included in Go's standard library.
  3. Since this was still leading to Out Of Memory errors, I decided not to store a copy of the package name in each search result, but rather use a pointer into a string pool containing all package names. The number of source package names is relatively small, on the order of 20,000, whereas the number of search results can be in the millions. Using the string pool is a clear win: the overhead in the case where #results < #srcpackages is negligible, but as soon as #results > #srcpackages, you save memory. (A small sketch of this layout follows below.)
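To illustrate the three measures above, here is a minimal, hypothetical Go sketch (not the actual DCS code; the names resultPointer and stringPool are made up for this example). It keeps only an (offset, length) pointer into the temporary file, an FNV hash of the path for deterministic tie-breaking, and an index into a string pool instead of a copy of the package name:

package main

import "hash/fnv"

// resultPointer references one serialized result in unsorted.json
// instead of keeping the whole result in memory.
type resultPointer struct {
	offset   int64   // where the serialized result starts in the temp file
	length   int64   // how many bytes it occupies
	ranking  float32 // used for sorting
	pathHash uint64  // FNV hash of the path, only used to break ranking ties
	pkgIdx   uint32  // index into the string pool of source package names
}

// stringPool stores every source package name exactly once.
type stringPool struct {
	names []string
	index map[string]uint32
}

func (p *stringPool) get(name string) uint32 {
	if idx, ok := p.index[name]; ok {
		return idx
	}
	idx := uint32(len(p.names))
	p.names = append(p.names, name)
	p.index[name] = idx
	return idx
}

// pathHash yields a deterministic (but otherwise meaningless) ordering key.
func pathHash(path string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(path))
	return h.Sum64()
}

func main() {
	pool := &stringPool{index: make(map[string]uint32)}
	_ = resultPointer{
		offset:   0,
		length:   123,
		ranking:  0.5,
		pathHash: pathHash("coreutils_8.23/src/ls.c"),
		pkgIdx:   pool.get("coreutils"),
	}
}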
With all of that fixed, the query became possible at all, albeit with a runtime of around 20 minutes.

Double Writing

When running such a long-running query, I noticed that the query ran smoothly for a while, but then it took multiple seconds without any visible progress at the end of the query before the results appeared. This was due to the frontend ranking the results and then converting unsorted.json into actual result pages. Since we provide results ordered by ranking, but also results grouped by source packages, it was writing every result twice to disk. What's even worse is that due to re-ordering, every read was essentially random (as opposed to sequential reads). And worse still, nobody will ever click through all the hundreds of thousands of result pages, so they are prepared entirely in vain. Therefore, with commit c744b236e I made the frontend generate these result pages on demand. This cut down the time for the ranking phase at the end of each query from 20-30 seconds (for big queries) to typically less than one second.

Profiling/Monitoring

After adding monitoring to each of the source-backends, I realized that during these long-running queries, the disk I/O and network I/O were nowhere near my expectations: each source-backend was sending only a low single-digit number of megabytes per second back to the frontend (typically somewhere between 1 MB/s and 3 MB/s). This didn't match up at all with the bandwidth I observed in earlier performance tests, so I used wget -O /dev/null to send a query and discard the result in order to get some theoretical performance numbers. Suddenly, I was getting more than 10 MB/s worth of results, maxing out the disks with a read rate of about 200 MB/s. So where is the bottleneck? I double-checked that neither the CPU on any of our VMs, nor the network between them, was saturated. Note that as of this point, the CPU of the frontend was at roughly 70% (of one core), which didn't seem a lot to me. Then, I followed this excellent tutorial on profiling Go programs to see where the frontend is spending its time. Turns out, the biggest consumer of CPU time was the encoding/json Go package, which is used for deserializing results received from the backend and serializing them again before sending them to the client. Since I had been curious about it for a while already, I decided to give Cap'n Proto a try to replace JSON as the serialization mechanism for communication between the source backends and the frontend. Switching to it (commit 8efd3b41) brought down the CPU load immensely, and made the query a bit faster. In addition, I killed the next biggest consumer: the lseek(2) syscall, which we used to call with SEEK_CUR and an offset of 0 so that it would tell us the current position. This was necessary in the first place because we don't know in advance how many bytes we're going to write when serializing a result to disk. The replacement is a neat little trick:
type countingWriter int64

// Write never fails and simply records how many bytes were written.
func (c *countingWriter) Write(p []byte) (n int, err error) {
    *c += countingWriter(len(p))
    return len(p), nil
}
// […]
// Then, use an io.MultiWriter like this:
var written countingWriter
w := io.MultiWriter(s.tempFiles[backendidx], &written)
result.WriteJSON(w)
With some more profiling, the new bottleneck was suddenly the read(2) syscall, issued by the Cap'n Proto deserialization, operating directly on the network connection buffer. strace revealed that, crunching through the results of one source backend for a long query, read(2) was called about 250,000 times. By simply using a buffered reader (commit 684467ae), I could reduce that to about 2,000 times.

Another bottleneck was the fact that for every processed result, the frontend needed to update the query state, which is shared amongst all goroutines (there is one goroutine for each source backend). All that parallelism isn't very effective if you need to synchronize the state updates in the end. So with commit 5d46a572, I refactored the state to be per-backend, so that locking is only necessary for the first couple of results, and the vast, vast majority of results can be processed entirely without locking.

This brought down the query time from 20 minutes to about 5 minutes, but I still wasn't happy with the bandwidth: the frontend was doing a bit over 10 MB/s of reads from all source backends combined, whereas with wget I could get around 40 MB/s with the same query. At this point, the CPU utilization was around 7% of one core on the frontend, and profiling didn't immediately reveal an obvious culprit. After a bit of experimenting (by commenting out code here and there ;-)), I figured out that the problem was that the frontend was still converting these results from Cap'n Proto buffers to JSON. While that doesn't take a lot of CPU time, it delays the network stream from the source-backend: once the local and remote TCP buffers are full, the source-backend will (intentionally!) not continue with its search, so that it doesn't run out of memory. I'm still convinced that's a good idea, and in fact I was able to solve the problem in an elegant way: instead of writing JSON to disk and generating result pages on demand, we now write Cap'n Proto directly to disk (commit 466b7f3e) and convert it to JSON only before sending out the result pages. That decreases the overall CPU time since we only need to convert a small fraction of the results to JSON, but most importantly, the frontend is now not in the critical path anymore. It can directly pass the data through, and in fact it uses an io.TeeReader to do exactly that.

Conclusion

With all of these optimizations, we're now down to about 2.5 minutes for the search query "arse", and the architecture of the system actually got simpler to reason about. Most importantly, though, the optimizations don't only play out for a single query, but for many different queries. I've deployed the optimized version on the 15th of December 2014, and you can see that the 99th, 95th and 90th percentile latency dropped significantly, i.e. there are a lot fewer spikes than before, and more queries are processed faster, which is particularly obvious in the third graph (which is capped at 2 minutes).
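As an aside, here is a minimal, hypothetical Go sketch of the buffered-reader and pass-through changes described above (the backend address, file name and the final discard step are placeholders; this is not the actual DCS code):

package main

import (
	"bufio"
	"io"
	"log"
	"net"
	"os"
)

func main() {
	// Placeholder address of one source backend.
	conn, err := net.Dial("tcp", "source-backend.example:28082")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Buffering avoids issuing one read(2) syscall per tiny deserialization read.
	br := bufio.NewReaderSize(conn, 64*1024)

	// Tee the raw serialized bytes to disk while they are being consumed,
	// so the frontend stays out of the critical path.
	tmp, err := os.Create("/tmp/results.capnproto")
	if err != nil {
		log.Fatal(err)
	}
	defer tmp.Close()

	r := io.TeeReader(br, tmp)

	// Deserialize results from r here; every byte read also ends up in tmp.
	if _, err := io.Copy(io.Discard, r); err != nil {
		log.Fatal(err)
	}
}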


9 December 2014

Wouter Verhelst: Playing with ExtreMon

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high munin missed it. It also comes with a great webinterfacefrontendthing that allows you to dig deep in the history of what you've been monitoring. By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too. Maybe use CollectD then? Rather than check once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements to accomplish that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out some. And collectd doesn't do the nagios thing of actually telling you when things go down. So what if you could see it when things go bad? At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me getting things set up for myself so I could play with it, I got a reply along the lines of...
well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?
which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code. As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere. While setting up ExtreMon as-is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do. First, we clone the ExtreMon git repository:
git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon
There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory, so you don't need to do anything special when updating things. Next, we build the java libraries (which we'll need for the display thing later on):
cd java/extremon
mvn install
cd ../client/
mvn install
This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository. We'll now build the display frontend. This is maintained in a separate repository:
cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install
This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. Patch (and pull request) on github. With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:
  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.
If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:
keytool -genkeypair -alias extremontest
After you enter all the information that keytool will ask for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise to the reader. Now, we will sign the .jar file:
jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest
There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course. You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory. Download this JNLP file, and store it in the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be by the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right. Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication: chalice_in_http is an input mechanism. Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right. Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring. Now run javaws https://url/x3console.jnlp to start ExtreMon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself into. Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out. Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.
Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind ctrl+shift+X. A lot. Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.

5 November 2014

Carl Chenet: Send the same short message on Twitter, Pump.io, Diaspora* and a lot more

Follow me on Identi.ca or Twitter or Diaspora*.

This is a feedback about installing a self-hosted instance of Friendica on a Debian server (Jessie). If you're not interested in why I use Friendica, just go to the "Prerequisite for Friendica" section below.

Frustration about social networks

Being a huge user of short messages, I was quite frustrated to spend so much time on my Twitter account. To be quite honest, there is not much I like about this social network, except the huge population of people potentially interested in what I write. I have also been using Identi.ca (now powered by Pump.io) for a while. But I tried for a while to manage both networks, Pump.io and Twitter, by hand and it was quite painful. And something was telling me another social network was going to appear from nowhere one of these days, and it would just be horrible to try to keep it up this way. So I was looking for a scalable solution not asking too much personal investment. Subscribing to Diaspora* some days ago on the Framasphere pod, tintouli told me to try Friendica. Hmmm, what's Friendica?

Friendica is a content manager you can plug on almost anything: social networks (Facebook, Twitter, Pump.io, Diaspora*, ...), but also WordPress, XMPP, emails... I'm in fact just discovering the power of this tool, but plugging it into my different social network accounts was quite a good use case for me. And I guess if you're still reading, for you too. I tried to use some shared public servers but I was not quite happy with the result: one connector was still missing or the public servers were really unstable. So I'm at last self-hosting my Friendica. Here is how.

Prerequisite for Friendica

You need to install the following packages:

# apt-get install apache2 libapache2-mod-php5 php5 php5-curl php5-gd php5-mysql mysql-server git

Having an already self-modified /etc/php5/apache2/php.ini, I encountered a small issue with libcurl and had to manually add the following line in the php.ini: extension=curl.so

Setting up MySQL

Connect to MySQL and create an empty database with a dedicated user:

# mysql -u root -pV3rYS3cr3t -e "create database friendica; GRANT ALL PRIVILEGES ON friendica.* TO friendica@localhost IDENTIFIED BY 'R3AlLyH4rdT0Gu3ss';"

Setting up Apache

My server hosts several services, so I use a subdomain friendica.mydomain.com. If you use a subdomain, of course check that you have declared this subdomain in your DNS zone. I use SSL encryption with a wildcard certificate for all my subdomains. My Friendica data are stored in /var/www/friendica. Here is my virtual host configuration for Friendica, stored in the file /etc/apache2/sites-available/friendicassl.conf:

<VirtualHost *:443>
ServerName friendica.mydomain.com
DocumentRoot /var/www/friendica/
DirectoryIndex index.php index.html
ErrorLog /var/log/apache2/friendica-error-ssl.log
TransferLog /var/log/apache2/friendica-access-ssl.log
SSLEngine on
SSLCertificateFile /etc/ssl/certs/mydomain/mydomain.com.crt
SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
SSLVerifyClient None
<Directory /var/www/friendica/>
AllowOverride All
Options FollowSymLinks
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

After writing the configuration file, just launch the following commands and it should be good for the Apache configuration:

# a2ensite friendicassl && /etc/init.d/apache2 reload

Setting up Friendica

Get the master zip file of Friendica, copy it onto your server and decompress it. Something like:

# cd /var/www/ && wget https://github.com/friendica/friendica/archive/master.zip && unzip master.zip && mv friendica-master friendica

You need to give www-data (the Apache user) the rights to write in /var/www/friendica/view/smarty3/:

# chown -R www-data:www-data /var/www/friendica/view/smarty3 && chmod -R ug+w /var/www/friendica/view/smarty3

Ok, I guess we're all set, let's launch the installation process! Using your web browser, connect to friendica.mydomain.com. First step: you'll see the installation window, which checks the prerequisites before installing. Complete if something is missing.

First window of the Friendica installation process

Second step asks for the host/user/password of the database; complete it and the installation process starts. Hopefully all goes just fine. Next you'll have to create /var/www/friendica/.htconfig.php with the content that the last page of the installation process provides. Just copy/paste, check the rights of this file, and now you can connect again to see the register page of Friendica at the url https://friendica.mydomain.com/register. Pretty cool!

Register a user

That's a fairly easy step. You just need to check beforehand that your server is able to send emails, because the password is going to be sent to you by email. If it is ok, you should now identify on the welcome page of Friendica and access your account. That's a huge step to broadcast your short messages everywhere, but we have some last steps before being able to send your short messages on all the social networks we need.

A small break: create an app for Twitter on apps.twitter.com

To send your short messages to Twitter, you need to create an app on apps.twitter.com. Just check you're logged in to Twitter and connect to apps.twitter.com. Create an app with a unique name (apparently), then go to the Keys and Access tokens page, and note the consumer key and the consumer secret. You'll later need the name of the app, the consumer key and the consumer secret.

Install and configure the addons

Friendica uses an addon system in order to plug into the different third parties it needs. We are going to configure the Twitter, Pump.io and Diaspora* plugs. Let's go back to our server and launch some commands:

# cd /tmp && git clone https://github.com/friendica/friendica-addons.git && cd friendica-addons
# tar xvf twitter.tgz -C /var/www/friendica/addon
# tar xvf pumpio.tgz -C /var/www/friendica/addon
# cp -a diaspora /var/www/friendica/addon

You need to modify your /var/www/friendica/.htconfig.php file and add the following content at the end:

// names of your addons, separated by a comma
$a->config['system']['addon'] = 'pumpio, twitter, diaspora';
// your Twitter consumer key
$a->config['twitter']['consumerkey'] = 'P4Jl2Pe4j7Lj91eIn0AR8vIl2';
// your Twitter consumer secret
$a->config['twitter']['consumersecret'] = '1DnVkllPik9Ua8jW4fncxwtXZJbs9iFfI5epFzmeI8VxM9pqP1';
// your Twitter app name
$a->config['twitter']['application_name'] = 'whatever-twitter';

Connect again to Friendica. Go to Settings => Social Networks; you will see the options for Twitter, Pump.io and Diaspora*. Complete the requested information for each of them. There are important options you should not forget to check for Pump.io, Twitter and Diaspora*. Done? Now it's time to send your first broadcasted short message. Yay!

Send a short message to your different social networks

Connect to Friendica, click on the network page, and write your short message in the Share box. Click on the lock icon and you'll see the sharing setup: it means your short messages will be broadcasted to the three networks. Or more, it's up to you! That's my setup, feel free to modify it. Now close the lock window and send your message. For me it takes some time to appear on Twitter and Diaspora*, and it immediately appears on Identi.ca.

Last words

Friendica offers to take back control of your data, by broadcasting content on different media from a single source. While self-hosting, you keep your data whatever happens and are not subject to companies losing your data, like Twitpic recently. Moreover, the philosophy behind Friendica pushed me to dig into and test the solution. What about you? How do you proceed to broadcast your short messages? Does Friendica offer a good solution in your opinion? Are you interested in the philosophy behind this project? Feel free to share your thoughts in the comments.

LAST MINUTE: hey, this article is on Hacker News, don't hesitate to vote for it if you liked it!

23 October 2014

Erich Schubert: Clustering 23 mio Tweet locations

To test the scalability of ELKI, I've clustered 23 million Tweet locations from the Twitter Statuses Sample API, obtained over 8.5 months (due to licensing restrictions by Twitter, I cannot make this data available to you, sorry).
23 million points is a challenge for advanced algorithms. It's quite feasible by k-means; in particular if you choose a small k and limit the number of iterations. But k-means does not make a whole lot of sense on this data set - it is a forced quantization algorithm, but does not discover actual hotspots.
Density-based clustering such as DBSCAN and OPTICS are much more appropriate. DBSCAN is a bit tricky to parameterize - you need to find the right combination of radius and density for the whole world. Given that Twitter adoption and usage is quite different it is very likely that you won't find a single parameter that is appropriate everywhere.
OPTICS is much nicer here. We only need to specify a minimum object count - I chose 1000, as this is a fairly large data set. For performance reasons (and this is where ELKI really shines) I chose a bulk-loaded R*-tree index for acceleration. To benefit from the index, the epsilon radius of OPTICS was set to 5000m. Also, ELKI allows using geodetic distance, so I can specify this value in meters and do not get many artifacts from coordinate projection.
To extract clusters from OPTICS, I used the Xi method, with xi set to 0.01 - a rather low value, also due to the fact of having a large data set.
The results are pretty neat - here is a screenshot (using KDE Marble and OpenStreetMap data, since Google Earth segfaults for me right now):
Screenshot of Clusters in central Europe
Some observations: unsurprisingly, many cities turn up as clusters. Also regional differences are apparent, as seen in the screenshot: plenty of Twitter clusters in England, and a low acceptance rate in Germany (Germans do seem to have objections about using Twitter; maybe they still prefer texting, which was quite big in Germany - France and Spain use Twitter a lot more than Germany).
Spam - some of the high usage in Turkey and Indonesia may be due to spammers using a lot of bots there. There also is a spam cluster in the ocean south of Lagos - some spammer uses random coordinates [0;1]; there are 36000 tweets there, so this is a valid cluster...
A benefit of OPTICS and DBSCAN is that they do not cluster every object - low density areas are considered as noise. Also, they support clusters of different shape (which may be lost in this visualization, which uses convex hulls!) and different size. OPTICS can also produce a hierarchical result.
Note that for these experiments, the actual Tweet text was not used. This has a rough correspondence to Twitter popularity "heatmaps", except that the clustering algorithms will actually provide a formalized data representation of activity hotspots, not only a visualization.
You can also explore the clustering result in your browser - the Google Drive visualization functionality seems to work much better than Google Earth.
If you go to Istanbul or Los Angeles, you will see some artifacts - odd shaped clusters with a clearly visible spike. This is caused by the Xi extraction of clusters, which is far from perfect. At the end of a valley in the OPTICS plot, it is hard to decide whether a point should be included or not. These errors are usually the last element in such a valley, and should be removed via postprocessing. But our OpticsXi implementation is meant to be as close as possible to the published method, so we do not intend to "fix" this.
Certain areas - such as Washington, DC, New York City, and Silicon Valley - do not show up as clusters. The reason is probably again the Xi extraction - these regions do not exhibit the steep density increase expected by Xi, but are too blurred into their surroundings to be a cluster.
Hierarchical results can be found e.g. in Brasilia and Los Angeles.
Compare the OPTICS results above to k-means results (below) - see why I consider k-means results to be a meaningless quantization?
k-means clusters
Sure, k-means is fast (30 iterations; not converged yet. Took 138 minutes on a single core, with k=1000. The parallel k-means implementation in ELKI took 38 minutes on a single node, Hadoop/Mahout on 8 nodes took 131 minutes, as slow as a single CPU core!). But you can see how sensitive it is to misplaced coordinates (outliers, but mostly spam), how many "clusters" are somewhere in the ocean, and that there is no resolution on the cities? The UK is covered by 4 clusters, with little meaning; and three of these clusters stretch all the way into Bretagne - k-means clusters clearly aren't of high quality here.
If you want to reproduce these results, you need to get the upcoming ELKI version (0.6.5~201410xx - the output of cluster convex hulls was just recently added to the default codebase), and of course data. The settings I used are:
-dbc.in coords.tsv.gz
-db.index tree.spatial.rstarvariants.rstar.RStarTreeFactory
-pagefile.pagesize 500
-spatial.bulkstrategy SortTileRecursiveBulkSplit
-time
-algorithm clustering.optics.OPTICSXi
-opticsxi.xi 0.01
-algorithm.distancefunction geo.LngLatDistanceFunction
-optics.epsilon 5000.0 -optics.minpts 1000
-resulthandler KMLOutputHandler -out /tmp/out.kmz
and the total runtime for 23 million points on a single core was about 29 hours. The indexes helped a lot: less than 10000 distances were computed per point, instead of 23 million - the expected speedup over a non-indexed approach is 2400.
Don't try this with R or Matlab. Your average R clustering algorithm will try to build a full distance matrix, and you probably don't have an exabyte of memory to store this matrix. Maybe start with a smaller data set first, then see how long you can afford to increase the data size.

30 August 2014

John Goerzen: 2AM to Seattle

Monday morning, 1:45AM. Laura and I walk into the boys' room. We turn on the light. Nothing happens. (They're sound sleepers.) "Boys, it's time to get up to go get on the train!" Four eyes pop open. "Yay! Oh I'm so excited!" And then, "Meow!" (They enjoy playing with their stuffed cats that Laura got them for Christmas.)

Before long, it was out the door to the train station. We even had time to stop at a donut shop along the way. We climbed into our family bedroom (a sleeping car room on Amtrak specifically designed for families of four), and as the train started to move, the excitement of what was going on crept in. Yes, it's 2:42AM, but these are two happy boys. Jacob and Oliver love trains, and this was the beginning of a 3-day train trip from Newton to Seattle that would take us through Kansas, Colorado, the Rocky Mountains of New Mexico, Arizona, Los Angeles, up the California coast, through the Cascades, and on to Seattle. Whew! Here we are later that morning before breakfast. Here's our train at a station stop in La Junta, CO, and at the beautiful small mountain town of Raton, NM, along with some of the passing scenery in New Mexico.

Through it all, we found many things to pass the time. I don't think anybody was bored. I took the boys exploring the train several times; we'd walk from one end to the other and see what all was there. There was always the dining car for our meals, the lounge car for watching the passing scenery, and on the Coast Starlight, the Pacific Parlor Car. Here we are getting ready for breakfast one morning. Getting to select meals and order in the train restaurant was a big deal for the boys. Laura brought one of her origami books, which even managed to pull the boys away from the passing scenery in the lounge car for quite some time. Origami is serious business. They had some fun wrapping themselves around my feet and challenging me to move. And were delighted when I could move even though they were trying to weigh me down! Several games of Uno were played, but even those sometimes couldn't compete with the passing scenery.

The Coast Starlight features the Pacific Parlor Car, which was built over 50 years ago for the Santa Fe Hi-Level trains. They've been updated; the upper level is a lounge and small restaurant, and the lower level has been turned into a small theater. They show movies in there twice a day, but most of the time, the place is empty. A great place to go with little boys to run around and play games. The boys and I sort of invented a new game: roadrunner and coyote, loosely based on the old Looney Tunes cartoons. Jacob and Oliver would be roadrunners, running around and yelling "MEEP MEEP!" Meanwhile, I was the coyote, who would try to catch them, even briefly succeeding sometimes, but ultimately failing in some hilarious way. It burned a lot of energy. And, of course, the parlor car was good for scenery-watching too.

We were right along the Pacific Ocean for several hours; sometimes there would be a highway or a town between us and the beach, but usually there was nothing at all between us and the coast. It was beautiful to watch the jagged coastline go by, to gaze out onto the ocean, watching the birds; apparently so beautiful that I didn't even think to take some photos. Laura's parents live in California, and took a connecting train.
I had arranged for them to have a sleeping car room near ours, so for the last day of the trip, we had a group of 6. Here are the boys with their grandparents at lunch Wednesday. We stepped off the train in Seattle into beautiful King Street Station.

Our first day in Seattle was a quiet day of not too much. Laura's relatives live near Lake Washington, so we went out there to play. The boys enjoyed gathering black rocks along the shore. We went blackberry picking after that and filled up buckets for a cobbler. The next day, we rode the Seattle Monorail. The boys have been talking about this for months: a kind of train they've never been on. That was the biggest thing in their minds that they were waiting for. They got to ride in the very front, by the operator. Nice view from up there. We walked through the Pike Market; I hadn't been in such a large and crowded place since I was in Guadalajara. At the Seattle Aquarium, we all had a great time checking out all the exhibits. The "please touch" one was a particular hit. Walking underneath the salmon tank was fun too.

We spent a couple of days doing things closer to downtown. Laura's cousin works at MOHAI, the Museum of History and Industry, so we spent a morning there. The boys particularly enjoyed the old periscope mounted to the top of the building, and the exhibit on chocolate (of course!) They love any kind of transportation, so of course we had to get a ride on the Seattle Streetcar that comes by MOHAI. All weekend long, we had been noticing the seaplanes taking off from Lake Washington and Lake Union (near MOHAI). So finally I decided to investigate, and one morning while Laura was doing things with her cousin, the boys and I took a short seaplane ride from one lake to another, and then rode every method of transportation we could except for ferries (we did that the next day). The view of Lake Washington from 1000 feet, from our Kenmore Air plane, was beautiful. I think we got a better view than the Space Needle, and it probably cost about the same anyhow. After splashdown, we took the streetcar to a place where we could eat lunch right by the monorail tracks. Then we rode the monorail again. Then we caught a train (it went underground a bit so it was a subway to them!) and rode it a few blocks. There is even scenery underground, it seems. We rode a bus back, and saved one last adventure for the next day: a ferry to Bainbridge Island.

Laura and I even got some time to ourselves to go have lunch at an amazing Greek restaurant to celebrate a year since we got engaged. It's amazing to think that, by now, it's only a few months until our wedding anniversary too! There are many special memories of the weekend I could mention: visiting with Laura's family, watching the boys play with her uncle's pipe organ (it's in his house!), watching the boys play with their grandparents, having all six of us on the train for a day, flying paper airplanes off the balcony, enjoying the cool breeze on the ferry and the beautiful mountains behind the lake. One of my favorites is waking up to high-pitched "Meow? Meow meow meow! Wake up, brother!" sorts of sounds. There was so much cat-play on the trip, and it was cute to hear. I have the feeling we won't hear things like that much more. So many times on the trip I heard, "Oh dad, I am so excited!" I never get tired of hearing that.
And, of course, I was excited, too.

7 April 2014

Andrew Pollock: [life] Day 69: Walk to King Island, a picnic at Wellington Point, the long slow acquisition of some linseed and a split lip

Today was a really good day, right up until the end, when it wasn't so good, but could have been a whole lot worse, so I'm grateful for that.

I've been wanting to walk out to King Island at low tide with Zoe for a while, but it's taken about a month to get the right combination of availability, weather and low tide timing to make it possible. Today, there was a low tide at about 10:27am, which I thought would work out pretty well. I wasn't sure if the tide needed to be dead low to get to King Island, so I thought we could get there a bit early and possibly follow the tide out. I invited Megan and Jason to join us for the day and make a picnic of it. It turned out that we didn't need a really low tide; the sand bar connecting King Island to Wellington Point was well and truly accessible well before low tide was reached, so we headed out as soon as we arrived.

I'd brought Zoe's water shoes, but from looking at it, thought it would be walkable in bare feet. We got about 10 metres out on the sand and Zoe started freaking out about crabs. I think that incident with the mud crab on Coochiemudlo Island has left her slightly phobic of crabs. So I went back to Jason's car and got her water shoes. I tried to allay her fears a bit by sticking my finger in some of the small holes in the sand, and even got her to do it too. I'm actually glad that I did get her water shoes, because the shell grit got a bit sharp and spiky towards King Island, so I probably would have needed to carry her more than I did otherwise. Along the way to the island we spotted a tiny baby mud crab, and Zoe was brave enough to hold it briefly, so that was good. We walked all the way out and partially around the island and then across it before heading back. The walk back was much slower because there was a massive headwind. Zoe ran out of steam about half way back. She didn't like the sand getting whipped up and stinging her legs, and the wind was forcing the brim of her hat down, so I gave her a ride on my shoulders for the rest of the way back.

We had some lunch after we got back to Wellington Point, and Zoe found her second wind chasing seagulls around the picnic area. After an ice cream, we went over to the playground and the girls had a great time playing. It was a pretty good park. There was this huge tree with a really big, thick, horizontal branch only about a metre or two off the ground. All the kids were climbing on it and then shimmying along the branch to the trunk. Zoe's had a few climbs in trees and seems not afraid of it, so she got up and had a go. She did really well and did a combination of scooting along, straddling the branch and doing a Brazilian Jiu-Jitsu-style "bear crawl" along the branch. It was funny seeing different kids' limits. Zoe was totally unfazed by climbing the tree. Megan was totally freaking out. But when it came to walking in bare feet in an inch of sea water, Zoe wanted to climb up my leg like a rat up a rope, in case there were crabs. Each to their own.

Zoe wanted to have a swim in the ocean, so I put her into her swimsuit, but had left the water shoes back in the car. Once again, she freaked out about crabs as soon as we got ankle deep in the water, and was freaking out Megan as well, so the girls elected to go back to playing in the park. After a good play in the park, we headed back home. We'd carpooled in Jason's truck, with both girls in the back.
I'd half expected Zoe to fall asleep on the way back, but the girls were very hyped up and had a great time playing games and generally being silly in the back. When we got back to our place, Jason was in need of a coffee, so we walked to the Hawthorne Garage and had coffee and babyccinos, before Megan and Jason went home. It was about 3:30pm at this point, and I wanted to make a start on dinner. I was making a wholemeal pumpkin quiche, which I've made a few times before, and I discovered we were low on linseed. I thought I'd push things and see if Zoe was up for a scooter ride to the health food shop to get some more and kill some time. She was up for it, but ran out of steam part way across Hawthorne Park. Fortunately she was okay with walking and didn't want me to carry her and the scooter. It took us about an hour to get to the health food shop. Zoe immediately remembered the place from the previous week where we'd had to stop for a post-meltdown pit stop and declared she needed to go to the toilet again. We finally made it out of the shop. I wasn't looking forward to the long walk back home, but there were a few people waiting for a bus at the bus stop near the health food shop, and on checking the timetable, the bus was due in a couple of minutes, so we just waited for the bus. That drastically shortened the trip back. Zoe managed to drop the container of linseed on the way home from the bus stop, but miraculously the way it landed didn't result in the loss of too much of the contents, it just split the container. So I carefully carried the container home the rest of the way. By this stage it was quite a bit later than I had really wanted to be starting dinner, but we got it made, and Zoe really liked the pumpkin quiche, and ate a pretty good dinner. It was after dinner when things took a turn for the worse. Zoe was eating an ice block for dessert, and for whatever reason, she'd decided to sit in the corner of the kitchen next to the dishwasher, while I was loading it. I was carrying over one of the plates, and the knife managed to fall off the plate, bounce off the open dishwasher door and hit her in the mouth, splitting her lip. Zoe was understandably upset, and I was appalled that the whole thing had happened. She never sits on the kitchen floor, let alone in the corner where the dishwasher is. And this knife came so close to her eye. Fortunately the lip didn't look too bad. It stopped bleeding quickly, and we kept some ice on it and the swelling went down. I hate it when accidents happen on my watch. I feel like I'm fighting the stigma of the incompetent single Dad, or the abusive single Dad, so when Zoe sustains an injury to the face like a fat lip, which could be misinterpreted, I, well, really hate it. This was such a freak accident, and it could have gone so much worse. I'm just so glad she's okay. Zoe recovered pretty well from it, and I was able to brush her teeth without aggravating her lip. She went to bed well, and I suspect she's going to sleep really well. It's a bit cooler tonight, so I'm half-expecting a sleep in in the morning with any luck.

21 December 2013

Christian Perrier: [life] Running update December 21st 2013

Last time I blogged about my running activities was after DebConf 13 in Switzerland, back in August. At that time, I had just completed two great mountain races in one month (the Mont-Blanc Marathon, then the EDF Cenis Tour, one being 42km with 2500m of positive climb and the other being 50km and 2700m). The EDF Cenis Tour was my best result overall in a trail race, being ranked 40th out of more than 300 runners and 3rd in my age category (men 50-59). So, in late August, I was preparing for my "autumn challenge", a succession of 3 long distance races in a row. Quite a challenge, indeed, to run 3 long distance events in a row, with only 3 weeks between them.

Preparation for all this was mostly piling up kilometers over kilometers. First in road and flat training, when the goal was the marathon. So, after peaking at 471 kilometers in August (more than 15km/day), I ran 452 in September and again 420 in October. During that preparation, I also broke my personal best in the half-marathon, down to 1h34. I therefore was perfectly fit for the Toulouse Marathon and, indeed, unsurprisingly, I achieved my first goal by breaking my personal best down to.....3h25' and a few seconds. Really a great achievement and something that gives me a little hope of being able to qualify for the Boston Marathon (though chances are a bit low again: I can apply because I'm below 3h30 but the chances that I get a seat are not very big).

Recovery from the marathon was easy, thanks to the big preparation, and then the second race came very quickly: Le Puy-Firminy. My third participation in this night race, organised by a cousin of mine. I completed the first one in 9h15....then, last year, the second one in 8h25. And, this year, well......7h15 and 26th out of over 200 runners. A huge, great and stunning performance for me, really. Everything went so well that I can't remember any moment where I had any doubt. And I will indeed remember the last kilometer, run along with my sister (who is also a runner) and where she.....couldn't follow me while I was sprinting at about 14km/h after 68 kilometers. Definitely my best race this year.

And then came the last challenge: the SaintéLyon. If you never saw it, you have no idea. While Le Puy-Firminy features 200 runners and 150 walkers over 68 kilometers, this one features 6000 runners for 75 kilometers. Six THOUSAND. The vision of a light snake, kilometers long, over the hills in the St-Etienne neighbourhood, was stunning. Moreover, we ran that one with snow, ice and cold (down to -10°C at the highest point of the race). So, here, the goal was running with my friend and sharing the joy of the race with her, all along...and eventually beating her personal best on this race (11h15 last year, while the race was 5km shorter). We made it really well, despite the crowd and the fact that it doesn't allow running one's real speed. Despite the ice and snow that make downhills really.....interesting. Despite the aches I had in my legs at the end of the race. We completed the race in 10h38, quite far from our secret goal (9h30), but that one was really ambitious...:-). And we crossed the finish line together, hand in hand, for the third time in our running lives. And we shared tears at the end of the race. And we shared many great moments over that week-end. Moments that let one remember what the definition of "friendship" is. I found my "running sister" in Sabine and this is something we really appreciate and is hard to explain. All this would make a great conclusion if.....I hadn't injured my leg with this accumulation.

Indeed, as one might expect after such a hard challenge, I discovered during the days that followed the last race that I have a fatigue fracture in my left tibia. As a consequence, I need to stop running for about 6 weeks....which you can understand is kinda hard for me. But, guess what? I'm allowed to bike...:-). So, well, I repaired my old mountain bike and now I'm biking instead of running...:-)

Finally, 2013 has been my greatest running year, again. 4700 kilometers run over the year, nearly 13km/day on average. 14 races (3 road races and 11 trail races). 463 hours spent running and 72,000 meters climbed (about 8 Sagarmatha, aka Everest, climbed). 2 personal best performances. 2 long distance races which I ran in more than 1 hour less than last year. 2 new mountain races. And my first running injury...:-)

2014 will be different. I will run again some races I already ran, such as the Paris 80km Ecotrail or Le Puy-Firminy (4th time in a row). But I also hope to be able to run one of the Ultra Tour du Mont-Blanc races in late August in Chamonix (I need to be picked at the bib lottery for this to happen). And probably some other interesting challenges, such as running the Paris Marathon dressed as SpongeBob....one week after running the 80km Ecotrail....:-) But you'll see that in my next running update, of course. Merry New Year and Happy Christmas!

8 October 2013

Russ Allbery: Review: Grimspace

Review: Grimspace, by Ann Aguirre
Series: Sirantha Jax #1
Publisher: Ace
Copyright: March 2008
ISBN: 0-441-01599-9
Format: Mass market
Pages: 312
Sirantha Jax is a jumper: a person with the rare genetic ability to jump spaceships through the shortcut dimension of grimspace. It's a science fiction idea with a long pedigree. It creates a separate in-group for the characters to belong to, a society with its own rules, and sets up tension with the rest of the institutions of the world. In Aguirre's version, jumpers can provide the transition but direction in grimspace has to be done by a pilot, and the jumper and the pilot develop a deep psychic bond. Jax had been jumping with her husband piloting, but on their last trip their ship crashed, killing everyone on board except her. As Grimspace opens, Jax is in a treatment facility that's much more like a prison. In theory, they're trying to figure out whether she's still able to (and safe to allow to) jump. In practice, something quite a bit more sinister is going on, a fact that the reader senses early but only learns more about after Jax is helped to escape. Scarred, deeply depressed, and nearly suicidal, she's pulled into the subversive plans of a small ship of renegades who are trying to break open the Corp monopoly on grimspace travel and their tight hold over every jumper. But she's not sure whether she trusts them, or whether she cares enough about their crusade to truly join in. Grimspace follows well-trod SF paths of special cadres of pilots and navigators, lawless corners of space with local warlords, and rebellion against smothering centralization, but its first-person protagonist and point of view have more in common with urban fantasy. It's written in first-person and, at times, nearly stream of consciousness from Jax's perspective, and Jax displays plenty of profanity, acidic commentary, and emotional angst. If her special ability were shapeshifting or magic instead of jumping spaceships, one could easily see her as the typical burned-out urban fantasy detective. I liked that. It's a fun point of view when it's written well, and it felt fresh when applied to a science fiction background instead of the more typical fantasy. That said, Grimspace has a rather rough start. Jax isn't sure what's going on at first and also isn't sure she cares, and I thought that carried over to the reader's experience. The first section of the book is rough and jagged, with staccato bursts of character introduction, world-building that doesn't quite cohere, and an extended stop on a very odd outback planet that felt to me like a repurposed stage from a western. Part of the early problem is that Jax and the reader are being intentionally kept in the dark about the real goal of this group, but part of the problem is also that none of the characters start off as very likable. That includes Jax, who is a complete mess and who is both scared and despairing to the point of being almost nihilistic. The world-building unfortunately does not carry the reader through that part of the story, and at about eighty pages in I wasn't sure I was going to like this book. It does, however, get much better. For one, Jax calms down and starts making emotional connections with the rest of the crew, and that lets her demonstrate skills as a fearless problem solver. For another, while the goal of the group of people she's fallen in with isn't exactly deep, it does make sense and it does slowly become a cause the reader (and Jax) can believe in. 
The frenetic pace of introduction and discovery also slows down, the cast stabilizes, and the surroundings get a bit less weird (and a bit more conventional for a science fiction novel). I didn't care for the alien baby subplot (I rarely do), but the rest of the story slowly pulled me in. One thing Aguirre did extremely well was surprise me. There are several points later in the book where Jax gets put in a fairly typical high-stress position and does something very atypical in reaction to it, twisting the story into a shape quite a bit unlike what I had been expecting. That made Jax feel like a true free agent, establishing her independence and her desire to control her own life, and that appealed to me. Aguirre also palms one card exceptionally well, setting up an ending that I thought was the best set of scenes in the whole book, and that had me thoroughly engrossed. One can see the bones of the urban fantasy heroine inside Jax's character, but they're fresh and interesting in an SF world, and they work better than I would have expected. There is a romance, in fitting with the urban fantasy inspiration, and it didn't quite work for me, but Aguirre also builds in a reason why it might not make sense and makes it spiky and fragile without being too cliched. By the end of the book, I was, if not entirely persuaded, at least willing to go along for the ride. And that's the best summary of Grimspace, I think. Parts of it don't make a great deal of sense, and parts of it are quite choppy. But I liked Jax once she starts getting a handle on her trauma, the other characters grew on me, and the plot surprised me in some interesting ways. It's not the smoothest or most polished SF novel I've read, but it has a lot of energy and an unusual genre mix. I think I'll stick along for the ride. Followed by Wanderlust. Rating: 7 out of 10

22 February 2013

Richard Hartmann: Finland II

Second part; bullet points to save time

6 July 2012

Marco Silva: C-d on UNIX

[...] read also says how many bytes of the file were returned, so end of file is assumed when a read says "zero bytes are being returned." [...] When a program reads from your terminal, each input line is given to the program by the kernel only when you type its newline (i.e., press RETURN). [...] Now try something different: type some characters and then a ctl-d rather than a RETURN:

$ cat -u
123ctl-d123

cat prints the characters out immediately. ctl-d says, "immediately send the characters I have typed to the program that is reading from my terminal." The ctl-d itself is not sent to the program, unlike a newline. Now type a second ctl-d, with no other characters:

$ cat -u
123ctl-d123ctl-d$

The shell responds with a prompt, because cat read no characters, decided that meant end of file, and stopped. ctl-d sends whatever you have typed to the program that is reading from the terminal. If you haven't typed anything, the program will therefore read no characters, and that looks like the end of the file. That is why ctl-d logs you out: the shell sees no more input. Of course, ctl-d is usually used to signal an end-of-file, but it is interesting that it has a more general function.
Brian W. Kernighan, Rob Pike. The UNIX Programming Environment. Prentice-Hall Software Series, 1984, section 2.1, pages 44-45.
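The passage above boils down to the contract of read(): a read that returns zero bytes is the end-of-file signal. As a minimal sketch of the same behaviour (my own illustration, not from the book), the following Python loop reads from the terminal; a line followed by RETURN is delivered whole, while ctl-d on an empty line makes the read return zero bytes, which the program treats as end of input.

# eof_demo.py: a read() that returns zero bytes signals end of file.
import os
import sys

while True:
    chunk = os.read(sys.stdin.fileno(), 1024)   # blocks until RETURN or ctl-d
    if not chunk:                               # zero bytes returned: end of file
        print("read returned 0 bytes: end of input")
        break
    print("read", len(chunk), "bytes:", repr(chunk))

Run it, type a few characters and then ctl-d, and you can watch the buffered characters arrive immediately; a ctl-d on an empty line ends the loop, just as it ends cat.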

3 June 2012

Sandro Tosi: How to migrate your entire Google account to a new one

I've been using a Google account for years, while keeping a "more serious" one alongside it. The time has come to make the switch and promote the other one to be my main account.

Google has a lot of services, and I'm using several of them, so changing the main account requires migrating the data of those services to the new account, given that it's not possible to merge two accounts. To see the full list of services your account is signed up for, go to your Account Product page. Some migrations are easy, others hell no: so I'm writing this post to keep track of the migration as it goes on.

First of all, there's a really good post about this migration at LifeHacker: it contains a lot of info I'm using here, but not all the services I need. Additionally, Google has a list of services and methods to migrate from one account to another.

Just to be generic, I'll call AccountSRC the original account and AccountDST the one I want to migrate to.

Migrate GMail

GMail is probably the most important service I have, and also the most difficult one to migrate. I have a lot of filters, so the first step is to migrate them:
  1. Login into GMail for AccountSRC
  2. Gear icon > Settings > Labs
  3. Enable the "Filter import/export" plugin
  4. Reload GMail to enable the plugin (if not done automatically)
  5. Gear icon > Settings > Filters
  6. At the bottom of the page, Select all
  7. Export: this will download an XML file with the filters in it
  8. Login into GMail for AccountDST
  9. Enable "Filter import/export" plugin
  10. Load the file containing the saved filters
  11. Import all the filters (or select which ones); this will also automatically create the labels defined in the filters.
I currently have a forward rule from AccountDST to AccountSRC: so let's revert the forward direction: from AccountSRC to AccountDST. This will allow the new account to receive the mails sent to the old one.
  1. Login into GMail for AccountDST
  2. Gear icon > Forwarding and POP/IMAP > Disable forwarding: this will stop redirecting mails from AccountDST to AccountSRC
  3. Login into GMail for AccountSRC
  4. Gear icon > Forwarding and POP/IMAP > add a forwarding address
  5. Enter AccountDST as the address to forward mails to
  6. A verification code is sent to AccountDST and you'll need to enter that code in AccountSRC to verify you have access to both mailboxes
  7. Once verified, select "Forward a copy of incoming mail to" to AccountDST and to delete the GMail copy on AccountSRC
There's no automatic way to migrate all your settings from the old GMail account to the new one, so you'll have to compare the settings pages side by side and reproduce on AccountDST what you had before. I suggest first enabling the labs you have on AccountSRC, and then going through every page and copying the configuration over.

Did I forget something? Yes, the hardest part: migrating the mail itself! There are a lot of guides about migrating mail; the way I prefer is through IMAP and Thunderbird (as described here): you move IMAP folders, and since they are equivalent to GMail labels, you automatically get your email with the correct labels (yes, the same mail appears in several folders, one for each label, and moving an email from one folder will not remove it from all the others). A rough script-based sketch of the same IMAP copy appears after the list below.
  1. Enable IMAP access on both AccountSRC and AccountDST
  2. Open Thunderbird (or Icedove if you're on Debian like me) and register both accounts
  3. You can now start moving folders/emails from AccountSRC to AccountDST
    1. Copying a folder that doesn't exist yet on AccountDST will automatically create a label with the same name
    2. Moving a mail into a folder that exists on both accounts will add the label named after that folder to the email
  4. Be ready, it's a LOOOOONG process, but it's safe and it guarantees a perfect result
With the method above you have all the labels you had before, but not the label colors (and I have a lot :( ).
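For those who prefer scripting to clicking, the folder-by-folder copy that Thunderbird performs can be approximated with plain IMAP. The sketch below is my own rough illustration, not the method from the post; the addresses, app passwords and label name are placeholders. It copies a single label from AccountSRC to AccountDST using Python's imaplib.

# copy_label.py: copy one Gmail label (exposed as an IMAP folder) between accounts.
import imaplib

SRC = ("accountsrc@gmail.com", "src-app-password")   # placeholders
DST = ("accountdst@gmail.com", "dst-app-password")   # placeholders
LABEL = "MyLabel"                                     # placeholder label/folder name

src = imaplib.IMAP4_SSL("imap.gmail.com")
src.login(*SRC)
src.select(LABEL, readonly=True)

dst = imaplib.IMAP4_SSL("imap.gmail.com")
dst.login(*DST)
dst.create(LABEL)          # creating the folder creates the label on AccountDST

typ, data = src.search(None, "ALL")
for num in data[0].split():
    typ, msg = src.fetch(num, "(RFC822)")
    dst.append(LABEL, None, None, msg[0][1])   # re-upload the raw message

src.logout()
dst.logout()

This sketch preserves neither flags nor internal dates (let alone label colors), and it is slow on large mailboxes, which is why the Thunderbird route described above remains the safer choice.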

Another tool that would be interesting to evaluate for this task is Gmvault.


Migrate Contacts

Contacts are available inside GMail:
  1. Login into GMail for AccountSRC
  2. On the combobox on top-left, select Contacts
  3. More > Export
  4. Export all the contacts in Google CSV format, since that format is meant to be re-imported into a Google account
  5. Login into GMail for AccountDST
  6. Contacts > More > Import
  7. Select the previously saved file and import it

Migrate Reader

Migrating Reader subscriptions is easy (a quick way to inspect the exported OPML file is sketched below):
  1. Login into Reader for AccountSRC
  2. Reader Settings -> Import/export
  3. Export your subscriptions in OPML format
  4. Now login into Reader for AccountDST
  5. Reader Settings -> Import/export
  6. And import the saved OPML file
Sadly, that doesn't migrate your starred items, but there's a (long & manual) solution here.
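Before importing, it can be reassuring to check what the export actually contains. The snippet below is a rough sketch of my own (not from the post); the filename is a placeholder, since the exported file's name may vary. It lists the feeds found in the OPML file.

# list_feeds.py: print the feeds contained in an OPML export from Reader.
import xml.etree.ElementTree as ET

tree = ET.parse("reader-subscriptions.xml")   # placeholder filename
for outline in tree.iter("outline"):
    url = outline.get("xmlUrl")               # feed entries carry an xmlUrl attribute
    if url:
        print(outline.get("title", ""), "->", url)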


Migrate Blogger

Login to Blogger for AccountSRC; then, for each blog you have there do this dance:
  1. Select the blog
  2. Go into Settings > Permissions
  3. Add AccountDST as a new author: this will send an email to AccountDST GMail account
  4. Accept the invitation (this will ask you to create a new Blogger account for AccountDST if you don't already have one)
  5. Go back to Blogger with AccountSRC
  6. Blog > Settings > Permissions
  7. Grant admin rights to AccountDST
At this point, both AccountSRC and AccountDST have admin rights; you can leave it as it is, or remove AccountSRC and so only AccountDST will be the admin (other shares should be kept unchanged). The same procedure is described in this Google support answer.

What can't be migrated as easily are all the profile settings you've made on AccountSRC. So profile information, blogs followed and so on have to be migrated by hand: boring, but there are just a few of them, doable in a matter of minutes.


Migrate Android phone

Android phones require a primary Google account to work, and of course my phone uses AccountSRC. Several sources say the only way to change the primary account is to factory reset the phone. But since the new Market app supports multiple accounts and Android 4.0 (Ice Cream Sandwich) no longer needs a primary account, I just waited for the update to ICS to arrive.

What you really need to do is to add a new account, AccountDST, to Settings > Account & Sync, and all the Google apps on the phone will be able to use the new account (you might need to select what to sync, and the new account from inside each app).


Migrate Calendar

Even with Calendar the settings will have to be migrated by hand: they are all on one page, so just keep two windows open and switch between them to sync the settings. The calendars themselves are easy to move over; for the main calendar:
  1. Login into Calendars for AccountSRC
  2. Go into Settings > Calendars tab
  3. Select export calendars: this will download an archive of your calendars in a zip file; you will need to extract all the ics files from that zip archive so you'll have the main calendar readily available
  4. Login into Calendars for AccountDST
  5. Settings > Calendars tab
  6. Import the main calendar
for the other calendars you can proceed this way:
  1. Login into Calendars for AccountSRC
  2. Settings > Calendars tab
  3. Share the calendar with AccountDST and give it "Make changes AND manage sharing" right - the new account will have full control over that calendar

Migrate Docs

The procedure to migrate documents in Google Docs (now Google Drive) is described here, and it boils down to:
  1. Login into Docs for AccountSRC
  2. Select all the documents
  3. Share them with AccountDST
  4. In the same window select AccountDST as "is owner"
and you've got all the docs as owned by the new account.


Migrate Groups

There's no way to migrate all your Groups subscriptions from one account to another (short of playing with alternate email addresses, which felt too risky to try), so you'll have to migrate all of them by hand.

On the other hand, if you've created some groups, it's possible to move them to the new account:
  1. Login into Groups for AccountSRC
  2. Enter in the "manage" area for the groups you've created
  3. Manage members > Invite members
  4. Invite AccountDST
  5. Accept the invitation by clicking on the link in the email sent to the AccountDST account
  6. Go back into AccountSRC manage area
  7. Select the AccountDST account
  8. Edit the subscription and set "Membership type" to "owner": now AccountDST is the owner of the list

Migrate Analytics

The process to migrate the account on Google Analytics is similar to Groups:
  1. Login into Analytics for AccountSRC
  2. Select Admin on top-right, then for each account you have
  3. Select the account, select "Users" tab, click on "+ New User"
  4. Enter AccountDST into the "Email Address" field and give that account the "Administrator" role
  5. Save
  6. Now AccountDST is able to access the same data on Analytics as AccountSRC

Migrate Picasa

To transfer your whole Picasa account follow these steps (as also described in this support post):
  1. Login into Picasa for AccountSRC
  2. Click on the gear on top-right > Photo Settings > Privacy and permissions
  3. On the "Migrate account" line click on "Migrate my photos"
  4. Insert AccountDST into the box for destination address
  5. In case you're using Picasa to store Blogger photos, also tick the box below
  6. At this point, AccountSRC will receive an email: copy the link in a session where you've logged in as AccountDST and then visit the page: that will start the process
  7. Wait until the transfer is completed: you'll receive an email to AccountDST when it's done

Migrate Google+

I'm not exactly a heavy user of G+, but I have some circles I'd like to move over (also because Reader sharing now works through G+ and it's a feature I use a lot). This is a process to move circles and contacts; as stated there, there's no way to move shared items and the like (sad!).


Migrate AdSense

It's not possible to migrate the current AdSense account: you have to register a new one. When you do, it's possible that Google will notice the new account belongs to the same person as the first one and will ask you to close the old AdSense account in favor of the new one.


Migrate Alert

Alerts allows you to export the list of alerts you've created, but it's not possible to import that list into another Google account, so you'll have to recreate them from scratch (I had just a few, so it was a matter of a minute or so).


Migrate Zoho Notebook

If you still miss Google Notebook (why did you let it die, big-G? couldn't you have open-sourced it? it was so awesome! anyway...), you probably know that there are two main alternatives to it: Evernote and Zoho Notebook. I decided to go with Zoho (I'm not entirely happy with it, but it works for now), and since it allows logging in with a Google account, it was using AccountSRC, so I needed to migrate that too:
  1. Login into Zoho Notebook for AccountSRC
  2. My Account > Profile > Email address
  3. "Add new email" and add AccountDST, it will send a confirmation email
  4. Then "Make Primary" on AccountDST, so now the new account can login into Zoho notebook with the Google info and have all the notes there.

Migrate Bookmarks

Google provides a guide to migrate Bookmarks from one account to another that basically is:
  1. Login into Bookmarks for AccountSRC
  2. Click on Export bookmarks and it will save an HTML file with the bookmarks in it
  3. Import the file in a browser
  4. Use the Google toolbar to import it into AccountDST
That's a kinda ugly way to do it, but it seems to work.


Re-enable Web History

If you used History, then you need to re-enable it on the new account:
  1. Login into History for AccountDST
  2. Click on "Turn Web History on" to enable Web History on the new account
but it seems to be impossible to export/import or migrate the history from one account to another, which is a shame: years and years of stats lost.


Migrate Books

Books doesn't allow migrating content between accounts, so you'll have to do it by hand:
  1. Select one book from AccountSRC, copying its link
  2. Go to AccountDST and navigate to the copied link
  3. Add the book to the right category, as it was before.

Migrate Webmaster Tools

Google released a document that describes how to migrate to another account; it's similar to Analytics and Groups migration.

In addition to that, you might also want to become an owner, which will give you full control over the sites.


Did I forget some services? Let me know!

Everyone has a different experience with Google products, so if you want to let me know how you migrated from one account to another for a product I didn't mention, just leave a comment and I'll edit the post (to be as complete as possible).

1 June 2012

Raphaël Hertzog: My Debian Activities in May 2012

This is my monthly summary of my Debian related activities. If you re among the people who made a donation to support my work (338.26 , thanks everybody!), then you can learn how I spent your money. Otherwise it s just an interesting status update on my various projects. Dpkg Like last month, I did almost nothing concerning dpkg. This will probably change in June now that the book is out The only thing worth noting is that I have helped Carey Underwood who was trying to diagnose why btrfs was performing so badly when unpacking Debian packages (compared to ext4). Apparently this already resulted in some btrfs improvements. But not as much as what could be hoped. The sync_file_range() calls that dpkg are doing only force the writeback of the underlying data and not of the meta-data. So the numerous fsync() that follow still create many journal transactions that would be better handled as one big transaction. As a proof of this, replacing the fsync() with a sync() brings the performance on par with ext4. (Beware this is my own recollection of the discussion, while it should be close to the truth, it s probably not 100% accurate when speaking of the brtfs behaviour) Packaging I uploaded new versions of smarty-gettext and smarty-validate because they were uninstallable after the removal of smarty. The whole history of smarty in Debian/Ubuntu has been a big FAIL since the start. Once upon a time, there was a smarty package and some plugins. Everything was great except that the files were installed in a way that differs from the upstream recommendations. So Ubuntu changed the path in their version of the package and did not check whether it broke anything else (and it did break all the plugins). Despite the brokenness of the plugins, this divergence survived for years. So several packages that were using Smarty were modified to use dpkg-vendor to use the correct path depending on whether it was built on Debian or Ubuntu. In 2010, Smarty 3.0 has been released and instead of upgrading the smarty package to this version, one of the smarty co-maintainers introduced a smarty3 package that used yet another path (despite the fact that smarty 3 had a mode to be compatible with smarty 2).
At some point, I informed him that he had to handle the migration of users of smarty to smarty3 he acknowledged and then lost interest in smarty ( I m no longer using it ) and did nothing. After some more bitrot, smarty has been forcefully orphaned in August 2011 by a member of the security team. And in March this year, it has been removed from unstable despite the fact that it still had reverse dependencies (usually removals only happen when they impact no other packages, I don t know why this wasn t the case here). At least the brokenness attracted some attention to the situation and Mike Gabriel contacted me about it. I offered him to take over the various packages since they all needed a real maintainer and he accepted. I sponsored his uploads of all smarty related packages (bringing in the latest upstream versions at the same time). In the end, the situation is looking better now, except that there s no migration path from users who rely on smarty in Squeeze. They will discover that they need smarty3 in Wheezy and that the various paths have to be adjusted. It s probably acceptable since the new upstream versions are no longer backwards compatible with smarty 2 The Debian Administrator s Handbook At the start of the month, I was busy preparing the release of the book. I introduced the publican-debian package to unstable, it s a Publican brand (aka a set of CSS and XSL stylesheets to tailor the output of Publican) using the Debian colors and using the Debian logo. This brand is used by the book. I also created the debian-handbook package and setup the public Git repository on alioth.debian.org. I was ready or so I thought. A few hours after the announce, the website became unusable because the numerous visitors were exhausting the maximum number of client connections. And I could not increase the limit due to Apache s memory usage (with PHP and WordPress). We quickly off-loaded most of the static files traffic to another machine and we setup bittorrent. The problem was solved for the short term. Thousands of persons downloaded the ebook and to this date, 135 copies of the paperback have been sold. Then I took a one-week vacation. Even though I had no Internet at the place I was, I wandered in the street to find a Freewifi wifi network (customers of the Free ISP can use those freely) to stay on top of incoming email. We quickly received some bug reports and I dealt with the easy ones (typos and the like) on the fly. When I came back at home, I manually placed 54 lulu orders for the people who opted for the paperback as reward during the fundraising campaign. A bit tedious but it had to be done (if only Lulu supported a way to batch many orders at once ). I also wanted a long term solution to avoid the use of an external host to serve static files (should a new traffic spike arrive ). So I installed nginx as a front-end. It serves static files directly, as well as WordPress pages which have been cached by wp-super-cache. Apache is still here listening on a local port and responding to the remaining queries forwarded by nginx. Once I ll migrate to wheezy, I might completely ditch apache in favor of php5-fpm to handle the PHP pages. Last but not least, I wanted to bootstrap the various translations that people offered to contribute. I wrote some documentation for interested translators and blogged about it. It s shaping up nicely check it out if you re interested to help! Thanks See you next month for a new summary of my activities.

No comment Liked this article? Click here. My blog is Flattr-enabled.

25 March 2012

Gregor Herrmann: RC bugs 2012/12

thanks to lucas' last archive rebuild, this week offered some new & easy additional RC bugs :) here's the list:

22 March 2012

Richard Hartmann: Svalbard

Svalbard The things you find when cleaning out a disk; preparing for re-installation of your laptop on a larger disk once the laptop comes back from repair... I thought I had posted this in early January, but apparently not. As it would be a shame to just throw this away, here goes: I am sitting in the Oslo airport, waiting for boarding back home to start. Seeing the sun after a week of darkness still feels strange. Inital trouble It's been a very interesting week, starting with our trip to Oslo in the other direction. We spent New Year's Eve in Oslo, timing our forced overnight stay before reaching Svalbard to coincide with something interesting. The close timing of our travel meant that there was exactly one flight to Oslo we could take. Never having heard of Air Baltic before and finding out that they are a discount airline, my gut feeling told me to be be wary. I refuse to fly with those airlines on principle, not wanting to support their business model while hurting airlines with decent treatment of customers. Unfortunately, I was forced to book with them in this case. As it turns out, my gut was right. More on that below. Our plane was late in landing and my luggage was lost. As it was New Year's Eve, the staff at Oslo airport understandably wanted to be home, not at work. Still, getting them to file a report was tedious and finding out days later that the report was incomplete was, well, not good. The express train from the airport to Oslo central station had closed early without any advance notice or local signs, presumably because of NYE. The gates were simply closed and that's that. We figured out how the bus system worked, got our tickets from a vending machine, saw the one bus to central station drive away, and proceeded to wait in the outside waiting area; at least we had a front spot in the queue. While we don't get cold easily, it was funny to see Norwegians in thin clothes stand around in the biting wind, apparently being comfortable. A young mother with a baby, who didn't anticipate being forced to wait outside at -15C, as opposed to just sitting down in a train, had no warm clothes for her son; something that was fixed by wrapping him in spare clothes from Ilona's luggage. After waiting for about thirty minutes for the next bus to appear, it parked twenty or thirty meters away from the designated parking spot. The rough queue disintegrated and if not for Ilona's leaving me with luggage and backpacks and storming off to fight the masses, said mother with baby and ourselves would have waited for another thirty minutes. More than hundred people were left stranded at the airport to celebrate there or on the bus. Meh. We hurried, by taxi, from central station to hotel to harbour and arrived about five minutes to twelve. NYE itself was nice. We stood on top of the opera house and watched a rather impressive show of fireworks through the thickening mists. Norwegian fireworks pack lot more punch than German ones; you actually feel your clothes shake when they go off. Getting to Longyearbyen The flight from Troms to Longyearbyen had free in-flight Wi-Fi and flying over the edge of satellite coverage demonstrated how far we were from everywhere else rather impressively. Arriving at Longyearbyen, I fixed the lost luggage claim with the help of an extremely nice woman working at the airport. She confirmed that Air Baltic is legally required to reimburse me after an arcane system based on a virtual IMF currency to the tune of about 1.500. 
That may sound like a lot, but seeing as I had most of my scuba gear with me, that's not even half of what my luggage was worth. I was forced to get by New Years with what I had on my body and went shopping the next day when I could buy at least a few things. I got by with spending about 180 by going for non-fitting bargain bin clothes, wanting to reduce impact on Air Baltic. Shortly after that, my luggage arrived, unannounced. Air Baltic refused to reimburse me even though they are legally required to, again the airport staff confirmed this. But unless I sue in the country of destination, Norway, I won't see any money. Long story short: Avoid Air Baltic if possible. They will break the law to cheat you out of money when they have a reasonable expectation of getting away with it. Update: Yep, seems they got away with it unless I take legal measures. You have been warned. Longyearbyen itself was very nice. We started off with a short, guided taxi tour around the city, seeing literally everything of it as there's not a lot of Longyearbyen to start with. Dogsledding Next day, as noted earlier, we went shopping and spent the "evening" with dogsledding which turned out to be tons of fun. We helped with harnessing the dogs which is somewhat cumbersome as the dogs are so eager to burn off their energy that they want to run all over the place, not being put into a harness and snapped onto the pull-line. Never having been dogsledding before I was a bit wary, but riding over not too rough terrain is almost trivial once you get the hang of it; listen to what the musher in front of you yells and emulate the same shouts with your own dogs. If the dogs become too fast, step onto the brake pad which simply drives spikes into the snow. If the dogs slow down while going uphill, skate with one foot to help them. That's it for speed. As the dogs are following the musher's sled anyway, steering the dogs was not a concern. Fun fact: The musher used a laser pointer to steer his dogs; simple, efficient, and presumably fun. We learned, by demonstration, that sledding dogs can eat snow, take a leak, and take a dump while running at full speed and pretty much all at the same time. The dogs are left either in cages or on long chains far away from the city as they tend to bark and howl a lot. We were surprised to learn that, even when there are seals left to hang dry as dog food nearby, there are no problems with bear attacks. Apparently even the extremely aggressive and hungry one year old males will not go near dogs. Still, while out in the ice and snow, our guide carefully flooded all crevices and cliff bases to root out female bears with young ones early. They hide their children from the wind and they will attack, dogs and all, if we get too close. That's why our musher carried rifle in his sled and flare gun on his body. Snowmobile The next "morning", we drove around by snowmobile. This turned out to be extremely boring as it was a curated trip with over half a dozen snowmobiles in our group; a stark contrast to our two-sled tour the day before. The last time I went snowmobiling, we raced each other up and down a two-star black (i.e. steep, bumpy, and curvy) ski slope, jumping several meters when racing over larger bumps and crossing streets, so riding single file at 30 km/h was... anticlimatic. Again, the guide had rifle and flare gun with him. We spent the afternoon and evening walking around the city. 
Ice bears, part I As the ice caves and the glacier were still closed, we decided to have another walk around the city. Having planned make it a quick tour, we lost track of time due to lack of sun and ended up walking around for seven hours. A note about that trip.. If you are alone with your wife, unarmed, climbing up a very steep and slippery mountainside over a sheet of ice with deep snow underneath and loose rocks in between, and then start shining around with your flashlight under the stilts of an abandoned mine that looks like in a horror movie, the correct answer to "What are you doing?" is never ever "Looking for ice bears". Even if it's the truth, this is not an acceptable answer. I crawled up the last part on all fours, camera and tripod in hand while Ilona stayed about two dozen meters below the mines' entrance, refusing to go another step towards the mine. As soon I made it up the creaky and shaky, for a lack of a better word, let's call it ladder, she forced me to come down again. Bleh, but I guess I deserved that. Armed photographer Next day, we got final confirmation that we would not be allowed to rent snowmobiles to explore the hinterland on our own and that other for some, and I quote, "crazy Russians", no one would even attempt to cross over to Barentsburg. Thus, we ended up renting a car for the ~20 kilometers of total road length. That turned out to be a great idea as it allowed us to get away from the light and take some very nice long exposures. It was then that I got a rental rifle, as well. There is a law against leaving the town unarmed and I was not about to test my luck too much. Turns out that, as part of Germany's WWII reparations, Norway received our all hand guns and as they still function perfectly when dirty and in cold climate which makes them still popular in Svalbard, today. The Karabiner 98 Kurz which I received is built incredibly well. It's somewhat scary inasmuch every detail is designed to make this weapon ready to fire. If you hurry or are inexperienced, you will end up with an unlocked and loaded rifle after putting in the bolt. Putting the safety in and not chambering a bullet takes conscious effort and knowledge of the weapon. This is in stark contrast to other weapons I had the chance to dissect, which all defaulted to being safe. Even other military weapons such as the AK74 and the M4A1 are inherently more secure, designed to be locked and safe. The Mauser K98... not so much. As an aside, they didn't bother to remove the Reichsadler and Swastika from the rifle, contending themselves with striking out the German registration number and stamping the rifle with a Norwegian one. I guess Norwegians don't really "care" about these signs as much as we Germans do. In a way, that's a good thing I guess, at least as long as it's an indication of indifference towards the sign, not one of forgetting or ignoring the underlying issues. Still, I was very glad to have rented the rifle. While Ilona tended to stay in the running car with heat and lights on, I went out and away from the car. Even when standing near a street, a medium snow storm will make you appreciate the four powerful arguments against being eaten by a random bear which are at the ready over your shoulder. We even went to the shooting range so I could get some practice. The procedure is very trusting, as is anything in Longyearbyen. 
After accessing the interior of one of the houses in a particular way which I won't specify here, you simply switch on the floodlights, put up the red flag, position a target and write your name into the guest book. Once you're done shooting, toss a few coins into the bowl next to the guest book, remove the targets and flag, turn off the lights, close the door and that's that. Unfortunately, the way in was under a few meters of snow so I couldn't get in any practice shots. Ice bears, part II Later, as I was standing on a wind-polished slate of ice taking pictures of the Seed Vault (located here), I heard an ice bear roar behind (i.e. to the north of) me. I consciously remember hearing the bear, I consciously remember facing the other way around, half-crouched, rifle raised in the direction of the roar. I also consciously remember smacking the safety off and chambering a bullet after having regained control of my body. I do not remember spinning around on a wind-polished slate of ice, so treacherous I hand to balance with my arms and didn't even lift my feet when walking over it, without losing balance or footing, nor do I remember crouching and raising the gun. Evolution really is amazing; no matter what primal chord that roar struck, it certainly saved a ton of people over the years. In my case, thankfully, there was no bear to be seen down the slope. There may be no sun or anything, but the snow reflects the starlight so you tend to see surprisingly far and as I was on on the mountainside and the roar came from down from the coast, and as I had my rifle, I decided to finish the photo session while keeping the slope in close view. In hindsight, I am still glad I decided to do that as the shots came out rather nice. Next day, we drove out to Mine 12, the farthest you can away from Longyearbyen. The dump truck transporting coal alternated between driving a full load of coal back to the city and being its own snowplow. One quick trip to get coal, one slow and empty trip to plow away snow, rinse, repeat. If not for that, our 4WD would never have made it all the way up to the mine. Neither snow storms in North America nor around the Alps prepared me for what people on Longyearbyen consider normal wind in their backyard. This is where the word wind-swept was invented. The main reason that Svalbard is inhabited at all, other than the Gulf Stream, is coal. We have been told that the coal up there is of extremely high quality and while I can't say much about that, I can say that it's hard as stone. This is nice as it does not smear when you get coal all over yourself. Just shake out your jacket and pants and you are good as new. On our way back, we met two locals who had just prepared the glacier and ice caves for tourists. Had I known that in advance, I would have tried to go with them to take pictures completely away from all artificial light. Oh well, can't have everything. Speaking of not being able to have everything, the outdoor hot tub in our hotel which integrated ice bar and BBQ grill was still under several meters of snow so we couldn't use that, either. Finally, the few divers who are in Longyearbyen didn't have time to take me onto a trip while I was there. As I already missed my opportunity to dive the Arctic circle when the one diver on Gr msey happened to be on the mainland and barely missing it by diving Str tan instead, this was kind of a bummer. On the plus side, this has given me a goal to pursue and achieve. 
Random notes For the rest of my notes, I will resort to a largely unsorted list of bullet points as there's just too much to talk about in prose. All in all, it was well worth it. PS: If you know anyone working with Google Maps, ask them to consider improving their coverage of the Arctic. This is a real pity. As is the rest of the Arctic and the fact that Google Earth cheats you out of the North and South Pole by stretching adjacent tiles into and over them.

10 January 2012

Andrew Pollock: [life] Breaking and entering, with permission

I had a bit of an adventure yesterday, which would have taken some explaining if the police had gotten involved. It went a little something like this... My friend and former co-worker Sara was in the US Virgin Islands for the holidays. Her boyfriend, Karl, flew there separately for the tail end of her time there. Yesterday, I received a phone call from Sara, saying that Karl had managed to fly out to the Virgin Islands without his passport. Apparently you can get there without one, but to get back into the mainland US, you need one. She wanted to know if I could get one of my lock-picking co-workers to break into their apartment and retrieve Karl's passport and mail it them. Karl was supposed to fly out the next day. Attempts by Sara to contact her landlord had failed, so they didn't have many other options (apart from mailing me a key, which would have cost them another day). I asked one of my co-workers, Jason, who I knew was into lock picking, if he was up for it, and he offered to put me in touch with another guy who had dominated the recent lock picking night that he'd run. So now I'm talking to David, who's on board with the mission, but doesn't have his lock picking gear on him. No problem, Jason says he'll lend me his, which was at work with him. So we have a plan. Our friends Ian and Melinda are currently in Australia. They've lent us their car because it's leased, and they have some minimum mileage they're supposed to do and they're under it, so I've been driving to work in their car some days. As it happens, I drove to work in it yesterday. So now David and I set out in a car that neither of us own, with a lock picking set that belongs to another person, to break into an apartment of someone who's in the Virgin Islands. What could possibly go wrong? I'm told that it's not illegal to own a lock picking set, but if you're caught with one on your person and you're not a locksmith, you can get into all sorts of trouble. On top of that I'd have a hard time explaining the car I'm driving. We get to Sara and Karl's condo complex. It has a common gate that visitors would normal get buzzed through. Turns out it's not that hard to climb over. It's got some benign-looking spiky things on top, but I could get a leg over from the left hand side of the gate and jump over without impaling myself. Then I let David in and we proceeded upstairs to Sara and Karl's apartment door, where David set to work. Sara said that just the dead bolt was locked. David started at it with Jason's tools, trying to be as discreet as possible. It was about 3:30pm and there was no one around, but we could hear some noises from the neighbouring apartment (the two front doors were right next to each other). After what felt like about half an hour without success (the last pin of the lock was particularly tricky apparently) David was having to resort to more noisy techniques with the lock, so I decided to take the up-front approach and just inform the next door neighbour what we were doing in case he/she (I think it was a she) decided to call the cops on us. I told her through the door why we were there and what we were doing. She didn't seem to care too much. David then proceeded to start "raking" the lock, essentially brute forcing the pins with a lot of jiggling, and finally managed to pick it and we were in. I quickly found Karl's passport where it was suspected to be, and then we pondered how we were going to lock the door again. 
We could have just locked the door knob instead of the deadbolt and closed the door behind us, but we weren't sure if Sara and Karl had a key to the doorknob (Sara said they always just locked the deadbolt). Sara was fine with leaving the door unlocked until they got home, but weren't so keen on leaving our fingerprints all over the place and then leaving the door unlocked. David tried to re-pick the deadbolt so that he could lock it via the same means as opened it, and I scouted around for a key. I managed to find a key that locked both the deadbolt and the doorknob, so I took that with us and locked up their apartment. In David's defense, the deadbolt was a bit stiff to lock even with the key. I dropped David back at work, collected my stuff (it was now about 4:30pm) and headed to the UPS Store to ship Karl's passport to him as fast as humanly possible. I just made the 5pm pick up. Today I received an SMS from Sara informing me that they had received the passport. I was very impressed with how fast it got to them. So that was all a bit of an adventure. I'm not sure how much longer Karl is going to have to stay in the Virgin Islands as a result. I'm going to suggest that Sara and Karl leave a spare key with someone in future.

28 December 2011

Guido Günther: GNOME Prepaid Manager 0.0.3

A recent trip to Switzerland made me dig out my prepaid card for UMTS usage again. This resulted in some minor enhancements for Prepaid Manager. The new release handles disabled and missing modems more reliably. It also has some visual feedback if we know the length of the top up code: GNOME Prepaid Manager screenshot This blog is flattr enabled.

10 November 2011

Adnan Hodzic: Linux power regression + overheating problem on ThinkPad [fixed?]

This blog post isn't directed only at ThinkPad owners, as most if not all notebook Linux users with Intel Core Duo 1/2 and i3/i5/i7 processors have been affected by this bug. And yes, this problem is present on the latest Debian Unstable and Ubuntu 11.10.

Prelude

I'm the owner of a ThinkPad X300, a great machine except for the fact that I just recently replaced its 3rd cooling fan! Yes, I do a lot of compiling and it's on all the time, but this kind of thing still shouldn't happen. I first linked this problem to the fact that the ThinkPad fan on Linux (as of 2.6.22) always runs at what is basically its maximum RPM, which is why there are numerous fan control scripts. My favorite one is Thinkfan, but controlling the fan doesn't really help if you have an overheating problem; if anything, running it at maximum speed only helps at its own cost. From kernel 2.6.38 up until 3.1 (still present) there has been a power regression problem, and besides this I had a slight overheating problem. Regarding the overheating, in the beginning I tried reporting bugs, tried different Thinkfan configurations, and blamed proprietary software such as Adobe Flash for spiking the CPU temperature; eventually that problem was somewhat solved. After numerous battery calibrations, which in the end didn't stop the battery life from getting poorer with each day, I just blamed it on the fact that the notebook was getting pretty old (~3 years). Then the consumer woke up inside of me and I thought it was time to get a new notebook. I laid my eyes upon the ThinkPad X1, a thing of beauty except for one major drawback, its price. I did some reading on the X1 and, interestingly enough, while the X300 comes with a Core 2 Duo L7100, the overheating + power regression was still present even on the latest Intel Core i* series. Reading this killed the consumer and woke up the hacker side.

Solution

The initial workaround for the power regression problem is to add pcie_aspm=force to the existing GRUB boot arguments. This did help to some extent, but what really helped in both cases was also adding i915.i915_enable_rc6=1, or at least I thought so, since this option only applies to Sandy Bridge (i3/i5/i7) and later. In the end my /etc/default/grub looks like:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force i915.i915_enable_rc6=1"
and make sure you run update-grub after making these changes. Besides the arguments I used there are others you can try; for more info please head to Tweaks To Extend The Battery Life Of Intel Linux Notebooks.

Results

The results I got from such a simple tweak are more than satisfactory: some ~45 extra minutes of battery life, and on top of that it lowered the temperature by some ~10°C. I guess this also buys me extra time until I get a chance to lay my hands on an X1. Smile

P.S.: After I posted this, some argued that this is a workaround rather than a fix, and the folks at [Phoronix] just posted what they call a proper solution to this problem. Also please note that although Sandy Bridge users who enable this might sometimes hit a video corruption bug, i915_enable_rc6 is still supposed to be enabled by default in 3.2. So the logical conclusion is that unless you're troubled by this problem, you might not want to use this workaround/fix at this point, and hope it'll be fixed in future releases of your favorite Linux distribution.
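After rebooting it is worth checking that the kernel actually picked the options up. The snippet below is a small sketch of my own (not from the post); note that the /sys path for the i915 parameter is an assumption based on the generic /sys/module/&lt;module&gt;/parameters/ layout and may differ between kernel versions.

# check_params.py: verify the boot parameters after a reboot.
from pathlib import Path

cmdline = Path("/proc/cmdline").read_text()
for param in ("pcie_aspm=force", "i915.i915_enable_rc6=1"):
    state = "present" if param in cmdline else "MISSING"
    print(param, state, "in /proc/cmdline")

# Assumed path: module parameters are normally exported under
# /sys/module/<module>/parameters/<name>.
rc6 = Path("/sys/module/i915/parameters/i915_enable_rc6")
if rc6.exists():
    print("i915_enable_rc6 =", rc6.read_text().strip())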

14 October 2011

Rudy Godoy: Dennis M. Ritchie aka dmr

Last weekend Dennis M. Ritchie, creator of the C programming language, co-creator of the UNIX and Plan 9 operating systems, and author of many other key contributions to computing, passed away due to an extended illness. I learned about it yesterday through Rob Pike, dmr's former colleague and friend. I'm writing this post to honor him and his legacy. While I've seen people mourn his departure, I've also noticed most technical people don't really understand why his work is important for today's computing, so I'm here to give a refresher. Since I'm good with visual tools (mindmaps), I've made one to describe the extent of dmr's contribution to the world. You can get a glimpse of how important it is. The main topics are his key contributions; the sub-topics are the current technologies that were built upon dmr's work. Click on the image to see a larger resolution version.

The C Programming Language

You can call this the mother of UNIX, of UNIX-like systems, and of many of the programming languages that people use today. Its power resides in its focus on simplicity, providing just a small set of tools. Most operating systems available, and in development, today are written in C. Most of the services that make the Internet work have been programmed in C. C also influenced other languages such as C++, Objective-C, and even Python and Google's Go.
The UNIX operating system
Today's Internet relies on UNIX-like operating systems running crucial services such as DNS, web servers, email servers, etc. None of this would be possible without an operating system built with the simplicity and design that it has. Its power resides in the vision and philosophy behind it: "write programs that do one thing and do it well". The GNU Project, Linux and the *BSDs were conceived on the idea of replicating this invention. Even Windows has some UNIX bits inside.
IPC
This is probably Dennis Ritchie's least-known contribution. Today it is crucial: Web 2.0 would not be able to be AJAXian if there were no IPC. IPC means Inter-Process Communication; simply put, making two system processes exchange messages. This foundation was key for concepts such as threads, RPC and others. Today's AJAX relies on the RPC concept, for instance.
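To make the idea concrete, here is a tiny sketch (my own illustration, not dmr's code) of the simplest form of IPC on a UNIX-like system: a parent and a child process exchanging a message through a pipe.

# pipe_demo.py: two processes exchange a message through a pipe.
import os

read_fd, write_fd = os.pipe()
pid = os.fork()                    # requires a UNIX-like system

if pid == 0:                       # child: the sender
    os.close(read_fd)
    os.write(write_fd, b"hello from the child process")
    os.close(write_fd)
    os._exit(0)
else:                              # parent: the receiver
    os.close(write_fd)
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)
    print("parent received:", message.decode())

Shell pipelines and the RPC mechanisms mentioned above are elaborations of this basic pattern: one process writes, another reads.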
So, whether you are programming the next hot Web 2.0 app or just writing a C program to play with sockets, remember that there were giants on whose shoulders you are standing now. Thank you Dennis, I have the book.
UNIX is very simple, it just needs a genius to understand its simplicity. -Dennis Ritchie
